12 research outputs found

    Analyzing the Effects of Role Configuration in Logistics Processes using Multiagent-Based Simulation: An Interdisciplinary Approach

    Get PDF
    Current trends like the digital transformation and Industry 4.0 are challenging logistics management: flexible process development and optimization have been a primary concern in research for the last two decades. However, flexibility is limited by the underlying distribution of action and task knowledge. Thus, our objective is to develop an approach to optimize the performance of logistics processes by dynamic (re-)configuration of knowledge in teams. One of the key assumptions of this approach is that the distribution of knowledge has an impact on a team's performance. Consequently, we propose a formal specification for representing active resources (humans or smart machines) and the distribution of action knowledge in multiagent-based simulation. In the second part of this paper, we analyze process quality in a psychologically validated laboratory case study. Our simulation results support our assumption, i.e., they show that the distribution of knowledge has a significant influence on process quality.
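
    As an illustration of the key assumption, the sketch below simulates, in a deliberately minimal way, how the distribution of action knowledge across agents can affect process quality. It is a hypothetical toy model, not the paper's formal specification; the team configurations, availability parameter, and quality measure are all invented.

```python
# Toy sketch (not the paper's formal specification): agents hold sets of
# "action knowledge" items; a process is a sequence of tasks, and a task
# succeeds only if some agent who knows the required action is available.
import random

def run_process(tasks, agents, p_busy=0.3):
    """Return process quality as the fraction of tasks completed."""
    completed = 0
    for task in tasks:
        capable = [a for a in agents if task in a]  # agents knowing this action
        if capable and any(random.random() > p_busy for _ in capable):
            completed += 1
    return completed / len(tasks)

random.seed(42)
actions = list(range(10))
tasks = [random.choice(actions) for _ in range(100)]

# Hypothetical configuration A: all knowledge concentrated in one expert.
concentrated = [set(actions), set(), set(), set()]
# Hypothetical configuration B: knowledge redundantly distributed in the team.
distributed = [set(actions) for _ in range(4)]

for label, team in (("concentrated", concentrated), ("distributed", distributed)):
    quality = sum(run_process(tasks, team) for _ in range(200)) / 200
    print(f"{label}: mean process quality = {quality:.2f}")
```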

    Assessing digital self-efficacy: Review and scale development

    Get PDF
    Today, digitalization is affecting all areas of life, such as education and work. The competent use of digital systems (especially information and communication technologies [ICT]) has thus become an essential skill. Despite longstanding research on human-technology interaction and diverse theoretical approaches describing competences for interacting with digital systems, research still offers mixed results regarding the structure of digital competences. Self-efficacy is described as one of the most critical determinants of competent digital system use, and various self-report scales for assessing digital self-efficacy have been suggested. Yet, these scales differ greatly in their proposed specificity, structure, validation, and timeliness. The present study aims to provide a systematic overview of existing measures of digital self-efficacy (DSE) and a comparison with current theoretical digital competence frameworks. Further, we present a newly developed scale that assesses digital self-efficacy in heterogeneous adult populations, theoretically founded in the DigComp 2.1 framework and social cognitive theory. The factorial structure of the DSE scale is assessed to investigate multidimensionality. Further, the scale is validated with respect to its nomological network (actual ICT use, technophobia). Implications for research and practice are discussed.
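
    As a rough sketch of the kind of factorial-structure analysis the abstract describes, the following example runs an exploratory factor analysis on simulated item responses. The six items, the two-factor setup, and the use of scikit-learn are assumptions made for illustration, not the study's actual DSE items or analysis pipeline.

```python
# Hypothetical factorial-structure check on simulated scale data; the
# items and loadings are invented and do not come from the DSE study.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
# Simulate responses to six items driven by two latent competences.
latent = rng.normal(size=(n, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],   # factor 1 items
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])  # factor 2 items
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 6))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(fa.components_.T, 2))  # item-by-factor loading pattern
```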

    MULTITTRUST - Multidisciplinary Perspectives on Human-AI Team Trust (Preface)

    No full text
    This preface summarises the first Workshop on Multidisciplinary Perspectives on Human-AI Team Trust (MULTITTRUST 2023), co-located with the 2nd International Conference on Hybrid Human-Artificial Intelligence (HHAI 2023), held on June 26th, 2023 in Munich, Germany.

    The role of agent autonomy in using decision support systems at work

    Get PDF
    Digitalization of work leads to ever-increasing information processing requirements for employees. Agent-based decision support systems (DSS) can assist employees in information processing tasks and decrease processing requirements. With increasing system capabilities, agency shifts between the user and the system, with high-autonomy DSS being able to take over complete information processing tasks. In the present study, we distinguish degrees of DSS autonomy, operationalized by levels of automation (LOA), the delegation of task processing stages, and user control. In two vignette studies, we investigate the effects of DSS autonomy on perceptions of information load reduction, technostress, and user intentions, as well as the moderating role of technology and job experience. With high DSS autonomy, participants reported higher levels of information load reduction and technostress as well as lower user intentions. Job experience was a significant moderator: for high-autonomy DSS, participants in the high job experience condition indicated greater information load reduction, lower technostress, and higher user intentions. Results suggest that, while beneficial for decreasing information load, high DSS autonomy may negatively impact technostress and user intentions. It is suggested that technology and job training may improve user reactions.
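
    The reported moderation can be illustrated with a minimal sketch: a regression with an autonomy x job experience interaction term, fitted to simulated vignette-style data. Variable names and effect sizes are hypothetical, not the study's.

```python
# Hypothetical moderation test: does job experience moderate the effect
# of DSS autonomy on technostress? Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "autonomy": rng.integers(0, 2, n),  # 0 = low, 1 = high DSS autonomy
    "job_exp": rng.integers(0, 2, n),   # 0 = low, 1 = high job experience
})
# Simulated outcome: high autonomy raises technostress, experience buffers it.
df["technostress"] = (0.6 * df["autonomy"]
                      - 0.5 * df["autonomy"] * df["job_exp"]
                      + rng.normal(scale=1.0, size=n))

model = smf.ols("technostress ~ autonomy * job_exp", data=df).fit()
print(model.summary().tables[1])  # the interaction term tests the moderation
```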

    Trust Dispersion and Effective Human-AI Collaboration: The Role of Psychological Safety (Short Paper)

    No full text
    Trust is a crucial factor in team performance, for human-human and human-AI teams alike. While research has made significant advancements in uncovering the factors that affect the human decision to trust an AI teammate, it disregards the potential dynamics of trust in teams with multiple team members. To address this gap, we propose that trust in AI is an emergent state that can be differentiated at the individual and team level. We highlight the importance of considering the dispersion of trust levels in human-AI teams to better understand how trust influences team performance. Furthermore, we transfer the concept of psychological safety from the human psychology literature and propose its role in buffering the potential adverse effects of dispersed trust attitudes.
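
    The dispersion idea can be made concrete with a tiny sketch (my illustration, not a measure proposed in the paper): individual trust ratings toward an AI teammate are aggregated into a team-level mean plus a within-team dispersion index, so two teams with identical average trust can still differ sharply.

```python
# Two hypothetical teams with the same mean trust in their AI teammate
# but very different dispersion of individual trust ratings (1-5 scale).
import statistics

teams = {
    "team_A": [4.0, 3.9, 4.1, 4.0],  # members agree: low dispersion
    "team_B": [4.9, 4.8, 1.5, 4.8],  # one distrusting member: high dispersion
}
for name, ratings in teams.items():
    mean = statistics.mean(ratings)
    sd = statistics.stdev(ratings)  # within-team SD as a dispersion index
    print(f"{name}: mean trust = {mean:.2f}, dispersion (SD) = {sd:.2f}")
```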

    Virtual Work Communication During a Pandemic—The Moderating Effect of Technology Expertise on Technology Overload

    Get PDF
    The coronavirus disease (COVID-19) pandemic and its accompanying restrictive measures have led to a sudden digitalization of all areas of work and to many knowledge workers now working entirely from home. In particular, the use of information and communication technologies (ICT) has been associated with negative outcomes such as technology overload. Interacting with technology is dynamic, and employees often have to face negative ICT events that are related to the technology’s characteristics (e.g., system reliability). In this preregistered study, we aimed to link ICT events with employees’ technology overload during a phase of intensive telework. In a daily diary study over the course of two weeks, we investigated how ICT events impact technology overload. Additionally, we explored how technology overload as well as professional isolation due to the pandemic-related restrictions impact employee strain. Multilevel regression modeling was used to explore the described relationships. ICT events were a significant predictor of technology overload, and a significant interaction effect of objective technology expertise was found. Technology overload further impacts ICT-related strain. No significant effects were found for professional isolation. A better understanding of the relationship between ICT events, technology overload, and technology expertise during a phase of extensive telework will help to develop training and support for employees to improve their interaction with virtual communication systems during times of social distancing and beyond.
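
    The multilevel analysis the abstract names can be sketched as a random-intercept model with daily observations nested within employees. The data below are simulated and the variable names are hypothetical; the sketch only shows the shape of such an analysis, not the study's actual model.

```python
# Hypothetical multilevel (random-intercept) model for daily diary data:
# days nested within employees, daily ICT events predicting overload.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_emp, n_days = 60, 10
emp = np.repeat(np.arange(n_emp), n_days)
intercepts = rng.normal(scale=0.5, size=n_emp)[emp]  # person-level variation
ict_events = rng.poisson(2, size=n_emp * n_days)     # daily negative ICT events
overload = (1.0 + 0.4 * ict_events + intercepts
            + rng.normal(scale=0.8, size=emp.size))

df = pd.DataFrame({"employee": emp, "ict_events": ict_events,
                   "overload": overload})
model = smf.mixedlm("overload ~ ict_events", df, groups=df["employee"]).fit()
print(model.summary())  # fixed effect of daily ICT events on overload
```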

    Shaping a multidisciplinary understanding of team trust in human-AI teams: a theoretical framework

    No full text
    Intelligent systems are increasingly entering the workplace, gradually moving from technologies that support work processes to artificially intelligent (AI) agents that become team members. Therefore, a deep understanding of effective human-AI collaboration within the team context is required. Both the psychology and computer science literature emphasize the importance of trust when humans interact either with human team members or with AI agents. However, empirical work and theoretical models that combine these research fields and define team trust in human-AI teams are scarce. Furthermore, they often fail to integrate central aspects, such as the multilevel nature of team trust and the role of AI agents as team members. Building on an integration of the current literature on trust in human-AI teaming across different research fields, we propose a multidisciplinary framework of team trust in human-AI teams. The framework highlights the different trust relationships that exist within human-AI teams and acknowledges the multilevel nature of team trust. We discuss the framework’s potential for human-AI teaming research and for the design and implementation of trustworthy AI team members.
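
    One loose way to make the framework's multilevel structure concrete (my representation, not the paper's formalism) is to store dyadic trust relations among human and AI members and aggregate them at the individual and team levels; the members and scores below are invented.

```python
# Hypothetical team with dyadic (truster -> trustee) trust scores on a
# 1-5 scale; individual-level and team-level trust are simple aggregates.
from itertools import permutations
from statistics import mean

members = {"alice": "human", "bob": "human", "agent1": "AI"}
dyads = {(a, b): 4.0 for a, b in permutations(members, 2)}
dyads[("bob", "agent1")] = 2.5  # Bob trusts the AI teammate less

# Individual level: each human's trust toward AI teammates.
trust_in_ai = {m: mean(v for (a, b), v in dyads.items()
                       if a == m and members[b] == "AI")
               for m, role in members.items() if role == "human"}
# Team level: emergent team trust as the mean over all dyadic relations.
team_trust = mean(dyads.values())
print(trust_in_ai, round(team_trust, 2))
```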

    Piecing Together the Puzzle: Understanding Trust in Human-AI Teams (Short Paper)

    No full text
    With the increasing adoption of artificial intelligence (AI) as a crucial component of business strategy, establishing trust between humans and AI teammates remains a key issue. The project “We are in this together” highlights current theories on trust in human-AI teams (HAIT) and proposes a research model that integrates insights from Industrial and Organizational Psychology, Human Factors Engineering, Human-Computer Interaction, and Computer Science. The proposed model suggests that in HAIT, trust involves multiple actors and is critical for team success. We present three main propositions for understanding trust in HAIT collaboration, focused on trustworthiness and trustworthiness reactions in interpersonal relationships between humans and AI teammates. We further suggest that individual, technological, and environmental factors impact trust relationships in HAIT. The project aims to contribute to the development of effective HAIT by proposing a research model of trust in HAIT.
